Climate: Sea Surface Temperature

A Healthy Waters Partnership Analysis

This script analyses and presents sea surface temperature data for the Northern Three reporting regions. The outputs are used in the Northern Three technical reports.
Author

Adam Shand

Published

January 27, 2026

Introduction

This script contains the methods used to wrangle, analyse and present sea surface temperature data in the Northern Three regions. For a guide on downloading SST data refer to the README document for the Spatial Analysis GitHub repo. Note that SST data should be downloaded automatically by this script.

Sea surface temperature data is predominantly used within the climate section of the technical report to “set the scene” for each basin in each region. SST data works in combination with the DHW data to provide an understanding of heat stress in the marine environment. The main objectives of this script are to:

  • Automatically download all required SST data.
  • Create a key table and summary statistics table.
  • Create a simplified monthly percentiles table for the technical report.
  • Plot long-term SST data for each region.
  • Plot the current year of SST data for each region.
  • Map the current year of SST data for each region.
  • Map the anomaly of the current year of SST data from the long-term mean.

Script Set Up

This script requires multiple core spatial packages, each of which is loaded below. Key variables and paths are also created.

#use pacman function to load and install (if required) all other packages
pacman::p_load(dplyr, tidyr, readr, purrr, stars, glue, here, sf, tmap, RColorBrewer, ggplot2, ncdf4, lubridate)

#install/load the custom RcTools package
#pak::pak("add-am/RcTools")
library(RcTools)

#define the script crs and target year
script_crs <- "EPSG:7855"
script_fyear <- 2025
script_fac <- 5

sf_use_s2(FALSE)

#build the data path and output path variables
data_path <- here("data/sst/")
output_path <- here(glue("outputs/sst/{script_fyear}/"))

#and create some additional sub folders (suppressing warnings if they already exist)
dir.create(glue("{data_path}/raw/"), recursive = TRUE, showWarnings = FALSE)
dir.create(glue("{data_path}/processed/"), showWarnings = FALSE)
dir.create(glue("{output_path}/plots/"), recursive = TRUE, showWarnings = FALSE)
dir.create(glue("{output_path}/maps/"), showWarnings = FALSE)

Load Data

Now that the script is set up, we need to load all of the required datasets. This will be broken into two segments:

  • Spatial data specific to the N3 region - such as the region, zone, and sub zone boundaries.
  • Sea Surface Temperature data

Spatial Data

The northern three boundaries are built using a custom function or reloaded from file.

#if the data is on file, load it, otherwise build it and save it for next time
if (file.exists(here("data/n3_region.gpkg"))){
  n3_region <- st_read(here("data/n3_region.gpkg"))
} else {
  n3_region <- build_n3_region()
  st_write(n3_region, here("data/n3_region.gpkg"))
}

The dataset is then divided into sections for ease of use later on.

#cut down the n3 dataset to just the marine region
n3_marine <- n3_region |> 
  filter(Environment == "Marine",
         BasinOrZone != "Burdekin Marine",
         Region != "Burdekin") |> 
  rename(Zone = BasinOrZone) |> 
  mutate(Region = case_when(Region == "Burdekin" ~ "Dry Tropics",
                           T ~ Region)) |> 
  group_by(Region) |> summarise(geom = st_union(geom)) |> 
  ungroup() |> st_cast()

#cut down to everything but the marine region
n3_basins <- n3_region |>
  filter(Environment != "Marine") |> 
  rename(Basin = BasinOrZone) |> group_by(Region, Basin) |> summarise(geom = st_union(geom)) |> 
  ungroup() |> st_cast()
WT Request

A special request from the WT team is to use their marine boundary, rather than the marine boundary as defined by the NRM groups. Thus, below we replace the NRM WT boundary with the custom version. Note that this outline was provided by the WT team and has no online equivalent.

#read in the custom WT outline and edit to match the main dataset
wt_custom <- st_read(here("data/historical_wt_boundary.gpkg")) |> 
  name_cleaning() |> 
  select(Name) |> 
  rename(Region = Name)

#remove old WT and add new
n3_marine <- n3_marine |> 
  filter(Region != "Wet Tropics") |> 
  bind_rows(wt_custom)

Sea Surface Temperature

Load in the sea surface temperature data from NOAA’s website. A custom function has been written that will automatically download the data from the online servers, then crop it. If the original or cropped version already exists on your local computer it will not re-download.

#define the place where full files should be saved
full_file_loc <- glue("{data_path}/raw/")

#define the place where cropped files should be saved
cropped_file_loc <- glue("{data_path}/processed/")

#define the area to crop the data to (here simply the full n3 region)
crop_region <- n3_region

#download, save, crop, and build the data
n3_sst <- sst_extract(full_file_loc, cropped_file_loc, crop_region)

#update the crs of the data to match the n3 region geometry
n3_sst <- st_warp(n3_sst, crs = script_crs)

Analyse Data

Now all the data loading has been completed we can begin the analysis. Key steps that occur in this section of the script are:

  • Creation of a “Key Table” (a table that contains all layer names, dates, and associated info (e.g. financial year)).
  • Creation of a summary table that will store all important data.
  • Creation of a report table that contains only the essentials needed for presentation in the technical report.

Key Table

The NetCDF/Raster format provides data that is stacked into layers. Each layer contains a grid of 197 (across) by 147 (down) cells. There are 492 layers in this data set. Each layer has its own name, characteristics, and information.

Below we create a table to provide insight into each layer of data. Using the LayerDate column, confirm that the data is current.

You can think of this table as a “key” that we will use to determine which layer names are associated with your choice of year, month, day, or financial year. Knowing the name of the layer is essential for selecting the layer from the data. For example, filtering by Fyear == 2025 gives us all the relevant layer names.

#create a tibble of layer names and dates
name_date_tbl <- tibble("LayerName" = names(n3_sst), "LayerDate" = time(n3_sst))

#add extra info (fyear, year, month, etc.)
name_date_tbl <- name_date_tbl |> mutate(Year = year(LayerDate),
                                         Month = month(LayerDate, label = T),
                                         Fyear = case_when(month(LayerDate) > 6 ~ Year + 1, T ~ Year))

Summary Table

The first analysis we will do is to create a summary table that provides an overview of all important aspects. This table will include information per region on:

  • Monthly mean
  • Annual mean
  • Long-term temperature
  • Monthly percentile rank
  • Annual percentile rank
  • Anomaly (+/- the ltm)
  • Percentage of ltm

Each of these components requires a bit of work.

Means

First up are the means. The exact means we are going to calculate are as follows:

  • Monthly Mean: This is the SPATIAL mean of all the sea surface temperature cells across the region. We will calculate this using aggregate(). For example, if there were 3 temperature cells in the Dry Tropics Marine Region for the date of 01/01/2020, with the values 25°C, 26°C, and 27°C, then the monthly mean value for the Dry Tropics Marine Region for that date would be 26°C.
  • Annual Mean: This is calculated as the MEAN of the monthly mean values for each region. Note, because it is the mean of the SPATIAL mean monthly values, it is both the mean across the months of the year AND the spatial mean of all temperatures experienced across the entire region.
  • Long-Term Mean (LTM): To understand if the current year had higher, or lower, temperatures, we need to compare it against something. The LTM is what we compare it against. The LTM is calculated by taking the mean of the annual mean values for each region from a 30-year block of data known as a climate normal (more on this in the LTM section below).
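A minimal numeric sketch of the spatial-then-temporal averaging described above (toy cell values, two months only, not real data):

```r
#hypothetical cell values for two months in one region
jan_cells <- c(25, 26, 27)
feb_cells <- c(27, 28, 29)

#monthly means are SPATIAL means across cells
monthly_means <- c(Jan = mean(jan_cells), Feb = mean(feb_cells))

#the annual mean is the mean of the monthly (spatial) means
annual_mean <- mean(monthly_means)
```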

Monthly Mean

This happens first at the monthly time frame. It is important to note that the monthly MEAN is the mean of all the sea surface temperature cells across the region, i.e. this is a spatial mean.

To get the data we need to specify the layers we are interested in. We use the “key table” from earlier to help here. For example, below I specify that I want layers from the 2025 financial year. (This can be updated by changing the globally set script_fyear variable.)

#get mean from all layers and convert into a table, clean column names and add metadata
n3_sst_tbl <- n3_sst |>
  aggregate(n3_marine, FUN = mean, na.rm = TRUE) |> 
  as.data.frame() |> 
  rename("MonthlyMeanSst" = 3, "LayerDate" = "time") |> 
  left_join(name_date_tbl, by = "LayerDate") |> 
  mutate(Region = rep(n3_marine$Region, nrow(name_date_tbl))) |> 
  select(-geom)

#remove financial years without a full set of data (usually the first partial year, and sometimes the most recent fyear).
removal_rows <- n3_sst_tbl |> 
  group_by(Region, Fyear) |> 
  summarise(MonthCount = length(unique(Month))) |> 
  ungroup() |> 
  filter(MonthCount < 12)

#remove from the main table any rows that appear in the removal table
monthly_region_sst <- n3_sst_tbl |> 
  anti_join(removal_rows, by = c("Region", "Fyear"))

Annual Mean

We can then easily calculate the annual sea surface temperature by taking the mean of the monthly mean values.

Remember that the monthly MEAN values are, as covered above, the spatial mean of all sea surface temperature cells within the region. Thus, the annual mean sea surface temperature for the region is also a spatial mean of all temperature cells in the region.

#calculate financial year annual sst statistics
annual_region_sst <- monthly_region_sst |> 
  group_by(Region, Fyear) |> 
  summarise(AnnualMeanSst = round(mean(MonthlyMeanSst), 1)) |> 
  ungroup()

#combine the monthly and annual tables
annual_region_sst <- left_join(monthly_region_sst, annual_region_sst)

#clean up
rm(monthly_region_sst)

We now need to take a moment to store all of this historic data to the side, as for one plot later on we will need every year.

all_years_sst <- annual_region_sst

However, for the most part, we only need to keep the current year of data, and the 30 years of data that will be used for the LTM (below).

#remove everything we don't need
annual_region_sst <- annual_region_sst |> 
  filter(Fyear %in% c((1991:2020), script_fyear))

Long Term Mean

To understand if the current year had higher, or lower, SST, we need to compare it against something. The LTM is what we compare it against. The LTM is calculated by taking the mean of the annual mean values for each region from a 30-year block of data known as a climate normal.

For sea surface temperature data we don’t have extensive pre-industrial data. So the 30-year climate normal that we are using is 1991 to 2020 (same as the rainfall analysis).

Something important to note is that, because we are working on the financial year for our results, the LTM will also be based on the financial year. So the 30-year period 1991 to 2020 is more specifically from 1st July 1990 to 30th June 2020.
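The financial-year rule described above (the same case_when logic used when building the key table) can be checked on a few toy dates:

```r
#toy dates either side of the financial-year boundary (illustration only)
dates <- as.Date(c("2019-06-30", "2019-07-01", "2020-06-30"))

#same rule as the key table: July onwards belongs to the NEXT financial year
fyear <- ifelse(as.integer(format(dates, "%m")) > 6,
                as.integer(format(dates, "%Y")) + 1,
                as.integer(format(dates, "%Y")))
```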

#select our 30 year reference period and calculate the ltm values (note we have to create this as a separate dataset to not accidentally grab the current year of data if it sits outside the LTM period).
climate_normal <- annual_region_sst |> 
  filter(Fyear %in% (1991:2020)) |>
  group_by(Region) |> 
  mutate(AnnualLtm = round(mean(AnnualMeanSst), 1)) |> 
  group_by(Region, Month, AnnualLtm) |> 
  summarise(MonthlyLtm = round(mean(MonthlyMeanSst), 1)) |> 
  ungroup()

#bind 30 ltm climate normal values to the main dataset
annual_region_sst <- left_join(annual_region_sst, climate_normal)

#clean up
rm(climate_normal)

Percentiles (Monthly and Annual)

Now that the three types of means have been calculated we can work on determining the percentiles for the data.

It is important to note here that the percentiles are calculated only from the same 30 years of data that are used by the LTM. This is a change from how we previously calculated percentiles (using the entire dataset).

#calculate the percentile ranks for the mean sst in each region each month. Percentiles are calculated from the LTM period plus the current year
annual_region_sst <- annual_region_sst |> 
  group_by(Region, Month) |> 
  mutate(MonthlyMeanSstPercentileRank = round(percent_rank(MonthlyMeanSst)*100, 1)) |> 
  ungroup() |> 
  group_by(Region) |> 
  mutate(AnnualMeanSstPercentileRank = round(percent_rank(AnnualMeanSst)*100, 1)) |>  
  ungroup()
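To see how percentile ranks spread each region's values between 0 and 100, here is a rough base-R helper that approximates dplyr::percent_rank() for the no-NA case (the script itself uses the dplyr function):

```r
#rough base-R equivalent of dplyr::percent_rank() (no-NA case, illustration only)
pct_rank <- function(x) (rank(x, ties.method = "min") - 1) / (length(x) - 1)

#five annual means: the coolest year ranks 0, the warmest ranks 100
ranks <- round(pct_rank(c(24.8, 25.1, 25.6, 26.0, 26.9)) * 100, 1)
```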

Anomalies

Once we have calculated the LTM we can compare the current year of data against it, and add the last statistics to the summary table: the current year’s anomaly from the LTM and percentage of the LTM.

#compare LTM and current year data
summary_tbl <- annual_region_sst |> 
  mutate("AnnualAnomaly" = round((AnnualMeanSst - AnnualLtm), 1),
         "MonthlyAnomaly" = round((MonthlyMeanSst - MonthlyLtm), 1),
         "AnnualPercentageOfLtm" = round(((AnnualMeanSst/AnnualLtm)*100), 1),
         "MonthlyPercentageOfLtm" = round(((MonthlyMeanSst/MonthlyLtm)*100), 1)) |> 
  ungroup()

#clean up
rm(annual_region_sst)

Save Summary Table

With the summary table (containing all relevant statistics) now completed we can save that table to our output location.

Remember, this summary table should only include the 30 years of data for the LTM period, and the single year that is the current fyear for the script.

#save to the main output folder
write_csv(summary_tbl, glue("{output_path}/sea-surface-temperature_summary.csv"))

Monthly Percentiles Table

The final table we need to create is the simplified percentiles table that will go directly into the technical report. This table contains the monthly region percentiles for the current year, plus the annual percentile. However, before saving, the data needs to be adjusted to fit into the following groupings:

  • “Lowest 1%”: percentile rank \(\le\) 1
  • “Very much below average”: percentile rank \(\gt\) 1 to \(\lt\) 10
  • “Below average”: percentile rank \(\ge\) 10 to \(\lt\) 30
  • “Average”: percentile rank \(\ge\) 30 to \(\lt\) 70
  • “Above average”: percentile rank \(\ge\) 70 to \(\lt\) 90
  • “Very much above average”: percentile rank \(\ge\) 90 to \(\lt\) 99
  • “Highest 1%”: percentile rank \(\ge\) 99

#filter for current year and drop unnecessary columns
monthly_percentiles_tbl <- summary_tbl |> 
  filter(Fyear == script_fyear) |> 
  select(c(Region, Month, MonthlyMeanSstPercentileRank, AnnualMeanSstPercentileRank))

#standardise values for each group
monthly_percentiles_tbl <-  monthly_percentiles_tbl |> 
  mutate(across(where(is.numeric), ~ case_when(. <= 1 ~ 1,
                                               . > 1 & . < 10 ~ 2,
                                               . >= 10 & . < 30 ~ 3,
                                               . >= 30 & . < 70 ~ 4,
                                               . >= 70 & . < 90 ~ 5,
                                               . >= 90 & . < 99 ~ 6,
                                               . >= 99 ~ 7)))

#pivot data wider for presenting
monthly_per_wide <- pivot_wider(monthly_percentiles_tbl, names_from = Month, values_from = MonthlyMeanSstPercentileRank) |> 
  relocate(AnnualMeanSstPercentileRank, .after = last_col())

Before we save this table, we will use a custom function to create an excel workbook that embeds colouring rules into the output. This function relies on an R package (openxlsx2) that is currently in development, and may or may not run smoothly. An overview of the custom function (called save_n3_table()) is as follows:

save_n3_table(df, file_name, target_columns, target_rows, scheme)

Where:

  • df: any tbl or data.frame - although this function will only work sensibly with the monthly climate tables
  • file_name: whatever you want the output file to be named
  • target_columns / target_rows: the columns and rows to which the colouring rules are applied
  • scheme: the colour scheme to use (here "Temperature")

#use a custom func from RcTools
save_n3_table(
  df = monthly_per_wide,
  file_name = glue("{output_path}/sea-surface-temperature_monthly-percentiles"),
  target_columns = 2:ncol(monthly_per_wide),
  target_rows = 1:nrow(monthly_per_wide),
  scheme = "Temperature"
)

Visualise Data

The final component of this script is to visualise SST data, using both plots and maps. Below we will create:

  • Line plots of long-term annual SST for each region
  • Line plots of the current year of SST for each region
  • Maps of the current year of SST for each region
  • Maps of the current year’s anomaly from long-term trends for each region

All Historical Data Plot

The standard plot that we create for all climate indicators is a line plot of data over time - to see long-term trends. This plot is currently included as appendix material for the technical reports.

for (i in n3_marine$Region){
  
  #pick out data based on the region name
  region_temp <- all_years_sst |> filter(Region == i)
  
  #get the ltm (30-year) for the region
  region_ltm <- summary_tbl |> filter(Region == i, Fyear %in% (1991:2020)) |> 
    select(AnnualLtm) |> max()
  
  #Set up the background data frame
  groups <- factor(c("+2.5 to +2°C", "+2 to +1.5°C", "+1.5 to +1°C", "+1 to +0.5°C", "+0.5 to 0°C", "0 to -0.5°C", 
              "-0.5 to -1°C", "-1 to -1.5°C", "-1.5 to -2°C", "-2 to -2.5°C"),
              levels = c("+2.5 to +2°C", "+2 to +1.5°C", "+1.5 to +1°C", "+1 to +0.5°C", "+0.5 to 0°C", "0 to -0.5°C", 
                         "-0.5 to -1°C", "-1 to -1.5°C", "-1.5 to -2°C", "-2 to -2.5°C"))
  
  transformer <- c(2.5, 2, 1.5, 1, 0.5, -0.5, -1, -1.5, -2, -2.5)
  
  x = rep(c(min(region_temp$Fyear), max(region_temp$Fyear) + 1), each = length(groups))
  
  #build a data frame for the background
  df <- data.frame(X = x, Groups = groups, Transformer = transformer)
  
  #create hi and low lims
  df <- df |> mutate(Y = region_ltm + Transformer) |> 
    mutate(Ylo = case_when(Transformer > 0 ~ Y - 0.5, TRUE ~ Y)) |> 
    mutate(Yhi = case_when(Transformer > 0 ~ Y, TRUE ~ Y + 0.5))
  
  #get min, max and break values for breaks in the background
  min_per <- min(df$Ylo)
  max_per <- max(df$Yhi)
  breaks <- unique(df$Yhi)
  
  #get max two values
  max_2 <- head(unique(sort(df$Yhi, decreasing = T)),2)
  
  #use these breaks to calculate the perfect spot for an annotation label
  label_location <- max_2[1] - (max_2[1]-max_2[2])/2
  
  #create colour palette
  col_palette <- rep(brewer.pal(length(groups), "RdBu"),2)  
  
  #plot the background layer
  background <- ggplot(df) +
    geom_ribbon(aes(x = X, ymin = Ylo, ymax = Yhi, fill = Groups), alpha = 1) +
    geom_line(aes(x = X, y = Y, color = Groups)) +
    scale_color_manual(values = col_palette, name = "°C +/- Long-Term Mean") + 
    scale_fill_manual(values = col_palette, name = "°C +/- Long-Term Mean")
  
  #create the main plot
  percent_df_plot <- background +
    geom_point(data = region_temp, mapping = aes(x = Fyear, y = AnnualMeanSst), colour = "black") +
    geom_line(data = region_temp, mapping = aes(x = Fyear, y = AnnualMeanSst), colour = "black") + 
    geom_hline(aes(yintercept = region_ltm, linetype = glue("{region_ltm}°C")), colour = "red") +
    scale_linetype_manual(name = "Long-Term Mean", values = 1) +
    geom_vline(xintercept = 1990, linetype = "dashed", colour = "blue") +
    geom_vline(xintercept = 2020, linetype = "dashed", colour = "blue") +
    annotate(geom = "label", x = 2004, y = label_location, label = "30-Year Climate Normal", 
             size = 3, hjust = 0.4, fill = "blue", colour = "white", fontface = "bold") +
    scale_y_continuous(name = "Sea Surface Temperature (°C)", limits = c(min_per, max_per), breaks = breaks, expand = c(0, 0)) +
    scale_x_continuous(name = "Financial Year (ending)", expand = c(0, 0)) + 
    ggtitle(glue("Mean annual sea surface temperature in the {i} region since 1985")) +
    theme_bw() + theme(panel.grid.major = element_blank(), 
                       panel.grid.minor = element_blank()) +
    theme(plot.title = element_text(hjust = 0.5))
  
  #edit variable name for better save path
  i_edit <- tolower(gsub(" ", "-", gsub("'", "", i)))
  
  #save the static plot
  ggsave(percent_df_plot, filename = glue("{output_path}/plots/{i_edit}-region_yearly_sea-surface-temperature.png"), 
       height = 7, width = 12)

}
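The save-path transformation inside the loop can be checked in isolation (the region name below is hypothetical):

```r
#example region name containing a space and an apostrophe (hypothetical)
i <- "O'Connell Coast"

#drop apostrophes, swap spaces for hyphens, lower-case everything
i_edit <- tolower(gsub(" ", "-", gsub("'", "", i)))
```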

The long-term annual plots are now saved; see below for an example.

percent_df_plot

Current Year Plot

A newer plot that we are looking at creating is a plot of the current year (monthly) compared to the long-term expected value for each month. This plot is not currently included in the technical report, but may be in the future.

#create ltm and confidence band data
summary_tbl <- summary_tbl |>
  mutate(MonthNumb = case_when(Month == "Jan" ~ 7, Month == "Feb" ~ 8, Month == "Mar" ~ 9,
                                Month == "Apr" ~ 10, Month == "May" ~ 11, Month == "Jun" ~ 12,
                                Month == "Jul" ~ 1, Month == "Aug" ~ 2, Month == "Sep" ~ 3,
                                Month == "Oct" ~ 4, Month == "Nov" ~ 5, Month == "Dec" ~ 6))

#initialize plotting loop
for (i in n3_marine$Region) {
  
  #get a 30-year normal table
  selected_region <- summary_tbl |> filter(Region == i, Fyear %in% (1991:2020))
  
  #get the 99th and 1st values for each month
  max_per <- selected_region |> 
    group_by(MonthNumb) |> 
    summarise(MaxSst = quantile(MonthlyMeanSst, 0.99),
              MinSst = quantile(MonthlyMeanSst, 0.01))
  
  #get the 90th and 10th values for each month 
  outer_per <- selected_region |> 
    group_by(MonthNumb) |> 
    summarise(MaxSst = quantile(MonthlyMeanSst, 0.9),
              MinSst = quantile(MonthlyMeanSst, 0.1))
  
  #get the 30th and 70th values for each month 
  inner_per <- selected_region |> 
    group_by(MonthNumb) |> 
    summarise(MaxSst = quantile(MonthlyMeanSst, 0.7),
              MinSst = quantile(MonthlyMeanSst, 0.3))
  
  #get a current year table
  cy_region <- summary_tbl |> filter(Region == i, Fyear == script_fyear)
  
  #plot the graph
  plot <- ggplot() +
    geom_ribbon(data = max_per, aes(x = MonthNumb, ymin = MinSst, ymax = MaxSst, fill = "#fae4d2")) +    # Add shaded ribbon
    geom_ribbon(data = outer_per, aes(x = MonthNumb, ymin = MinSst, ymax = MaxSst, fill = "#fabf8c")) +  # Add shaded ribbon
    geom_ribbon(data = inner_per, aes(x = MonthNumb, ymin = MinSst, ymax = MaxSst, fill = "#ed872d")) +  # Add shaded ribbon
    geom_smooth(data = selected_region, aes(x = MonthNumb, y = MonthlyMeanSst, color = "black"), se = F, linewidth = 1.5) +
    geom_line(data = cy_region, aes(x = MonthNumb, y = MonthlyMeanSst, colour = "red"), linewidth = 1.5, show.legend = T) +
    scale_fill_identity(name = "Percentile", labels = c("30th-70th", "10th-90th", "1st-99th"), guide = "legend") +
    scale_color_identity(name = "Temperature", labels = c("Long-Term Mean", "Financial Year"), guide = "legend") +
    scale_x_continuous(name = "", breaks = 1:12, labels = cy_region$Month, expand = c(0, 0)) +
    scale_y_continuous(name = "Temperature (°C)", expand = c(0, 0)) +
    ggtitle(glue("Monthly sea surface temperature in the {i} region for the {script_fyear} financial year")) +
    theme(panel.background = element_blank(), panel.border = element_blank(), panel.grid.major = element_blank(),
          axis.title.x = element_blank(), axis.line = element_line(colour = "black"),
          plot.title = element_text(hjust = 0.5))

  #edit target basin variable slightly for better save path
  i <- tolower(gsub(" ", "-", gsub("'", "", i)))
  
  ggsave(glue("{output_path}/plots/{i}-region_monthly_sea-surface-temperature.png"), plot, width = 10, height = 4)
}

These plots are much simpler and might be useful for more educational/quick-read pieces.

plot

Current Year and Anomaly Maps

Another staple of the technical report is a map of the mean annual sea surface temperature for each region for the current year. To complement this map we will also create a map of the current year’s anomaly from the long-term mean annual SST for each region, and then plot them side-by-side.

In the first code chunk we create each of the raster layers required.

#use the aggregation helper to group data into annual layers
n3_sst_annual <- nc_aggregation_helper(n3_sst, "Annual")

#pull out the most recent layer
cy_sst_map <- n3_sst_annual[,,,which(st_get_dimension_values(n3_sst_annual, "time") %in% script_fyear)]

#and pull out the 30 year ltm layers then get their overall average
ltm_30_stack <- n3_sst_annual[,,,which(st_get_dimension_values(n3_sst_annual, "time") %in% 1991:2020)]
ltm_30_mean <- st_apply(ltm_30_stack, c(1,2), mean, na.rm = TRUE)

#subtract the ltm from the current to determine the anomaly
anom_sst_map <- cy_sst_map - ltm_30_mean

Calculate Legend Values

Then we need to determine the min and max values to use for the legend for the anomaly map.

It is important to note here that it was decided that the anomaly maps require a consistent legend between years (so they also share a colour scheme - i.e. shades of red in all maps are associated with the same °C temperature values). It was also decided that this was not necessary for the current year sea surface temperature maps.

Work was done to experiment with a range of options for the best min and max values, and it was decided to use the min and max anomaly values recorded in the 30-year climate normal period.

This is actually really easy to do as well (in this specific example, other options were a right pain).

Extra Note - each year past 2020 has itself been an extreme, so we simply include these years in the assessment.

#compare every year against the 30 year ltm
all_anom_layers <- purrr::map(seq(dim(n3_sst_annual)[[3]]), \(x) {n3_sst_annual[,,,x] - ltm_30_mean})

#stack all comparisons into a single stars object
sst_all <- do.call(c, c(all_anom_layers, list(along = "time")))

#use the high res crop function to crop and increase resolution at the same time
sst_all_crop <- nc_high_res_crop(sst_all, n3_marine, script_fac)
cy_sst_map_crop <- nc_high_res_crop(cy_sst_map, n3_marine, script_fac)
anom_sst_map_crop <- nc_high_res_crop(anom_sst_map, n3_marine, script_fac)

We can then plot the current year, and current year anomaly, SST data at a region and basin level.

#change name of layer for plot
names(cy_sst_map_crop) <- "Mean SST (°C)"
names(anom_sst_map_crop) <- "Anom. SST (°C)"

#create some vectors of objects in the global environment to iterate over
map_type <- c("anom", "cy")
pal_type <- c("-brewer.rd_bu", "brewer.reds")
mid_type <- list(0, NULL)
break_type <- c("anom_break", "cy_break")

#using unique regions created earlier
for (i in n3_marine$Region) {

  #filter all marine zones by region
  target_region <- n3_marine |> filter(Region == i)
  
  #get the associated basins
  region_basins <- n3_basins |> filter(Region == i)

  #crop to the specific region
  cy <- st_crop(cy_sst_map_crop, target_region)
  anom <- st_crop(anom_sst_map_crop, target_region)
  all_anoms <- st_crop(sst_all, target_region)
  
  #calculate the breaks for the cy legend based on the min and max of the actual cy data
  cy_break <- plyr::round_any(
    seq(
      from = min(cy[[1]], na.rm = TRUE), 
      to = max(cy[[1]], na.rm = TRUE), 
      length.out = 11), 
    0.01)

  #calculate the breaks for the anom legend based on the min and max for all years of data compared to the ltm
  anom_break <- plyr::round_any(
    seq(
      from = min(all_anoms[[1]], na.rm = TRUE), 
      to = max(all_anoms[[1]], na.rm = TRUE), 
      length.out = 11), 
    0.1)
  
  for (j in 1:2){
    
    #create a map of the area
    map <- tm_shape(get(map_type[j])) +
      tm_raster(col.scale = tm_scale_intervals(values = pal_type[j], 
                                               style = "fixed", 
                                               breaks = get(break_type[j]),
                                               midpoint = mid_type[[j]]),
                col.legend = tm_legend(reverse = T)) +
      tm_shape(qld) +
      tm_polygons(fill = "grey80", 
                  col = "black") +
      tm_shape(target_region, is.main = T) +
      tm_borders(col = "black") +
      tm_shape(region_basins) +
      tm_polygons(fill = "grey90", 
                  col = "black") +
      tm_text("Basin", 
              xmod = -1.8, 
              ymod = 0.1, 
              size = 0.7,
              options = opt_tm_text(shadow = T)) +
      tm_layout(legend.frame = T, 
                legend.bg.color = "White", 
                legend.text.size = 0.7, 
                legend.position = c("right", "bottom"),
                asp = 1.1)
    
    #edit variable name for better save path
    i_edit <- tolower(gsub(" ", "-", gsub("'", "", i)))
    
    #save map for later
    assign(glue("{map_type[j]}_mean_map_{i_edit}_region"), map)
      
    #save the map as a png
    tmap_save(map, filename = glue("{output_path}/maps/{i_edit}-region_{map_type[j]}_sea-surface-temperature.png"))
    
  }
}

With an example output that looks like this (the actual outputs look much better, without overlap etc.):

map

Finally, we can combine all of these plots into side-by-side versions.

for (i in unique(n3_marine$Region)) {
  
  #edit variable name for better save path
  i <- tolower(gsub(" ", "-", gsub("'", "", i)))
  
  #grab two of the maps
  x <- get(glue("cy_mean_map_{i}_region"))
  y <- get(glue("anom_mean_map_{i}_region"))
  
  #combine them
  map <- tmap_arrange(x,y)
  
  #save the map as a png
  tmap_save(map, filename = glue("{output_path}/maps/{i}-region_sea-surface-temperature_facet-map.png"))
}

Here is an example of how the maps look.

map

Script complete :)

Our Partners

Please visit our website for a list of HWP Partners.

Icons of all HWP partners  

A work by Adam Shand. Reuse: CC-BY-NC-ND.

to@drytropicshealthywaters.org

 

This work should be cited as:
Adam Shand, "[document title]", 2024, Healthy Waters Partnership for the Dry Tropics.